IS

Newsted, Peter R.

Topic Weight Topic Terms
0.624 instrument measurement factor analysis measuring measures dimensions validity based instruments construct measure conceptualization sample reliability
0.587 structural pls measurement modeling equation research formative squares partial using indicators constructs construct statistical models
0.476 validity reliability measure constructs construct study research measures used scale development nomological scales instrument measurement
0.417 adaptive theory structuration appropriation structures technology use theoretical ast capture believe consensus technologies offices context
0.288 effects effect research data studies empirical information literature different interaction analysis implications findings results important
0.230 research information systems science field discipline researchers principles practice core methods area reference relevance conclude
0.194 response responses different survey questions results research activities respond benefits certain leads two-stage interactions study
0.141 intelligence business discovery framework text knowledge new existing visualization based analyzing mining genetic algorithms related
0.132 increased increase number response emergency monitoring warning study reduce messages using reduced decreased reduction decrease
0.132 computing end-user center support euc centers management provided users user services organizations end satisfaction applications
0.127 use question opportunities particular identify information grammars researchers shown conceptual ontological given facilitate new little
0.117 results study research information studies relationship size variables previous variable examining dependent increases empirical variance

Co-authorship network: focal researcher, coauthors of the focal researcher (1st degree), and coauthors of coauthors (2nd degree). The number on each edge is the number of co-authorships.

Co-authors (number of co-authorships): Chin, Wynne W. (4); Gopal, Abhijit (2); Huff, Sid L. (1); Marcolin, Barbara L. (1); Munro, Malcolm C. (1); Salisbury, Wm. David (1); Salisbury, David (1)
Keywords (count): Structural Equation Modeling (3); Scale Development (2); Adaptive Structuration Theory (1); Causal modeling (1); Confirmatory Factor Analysis (1); End-user computing satisfaction (1); Electronic Meeting Systems (1); Interaction Effects (1); Model specification (1); Measurement Error (1); Moderators (1); PLS (1); Questionnaire development (1); Research methodology (1); Structural equation modeling (1); Survey research (1); Technology Appropriation (1)

Articles (5)

A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/Adoption Study. (Information Systems Research, 2003)
Abstract:
    The ability to detect and accurately estimate the strength of interaction effects are critical issues that are fundamental to social science research in general and IS research in particular. Within the IS discipline, a significant percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (cf. McKeen et al. 1994, Weill and Olson 1989). In our survey of such studies, the majority failed to either detect or provide an estimate of the effect size. In cases where effect sizes are estimated, the numbers are generally small. These results have led some researchers to question both the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships. The capacity of this approach at recovering true effects in comparison to summated regression is demonstrated in a Monte Carlo study that creates a simulated data set in which the underlying true effects are known. Analysis of a second, empirical data set is included to demonstrate the technique's use within IS theory. In this second analysis, substantial direct and interaction effects of enjoyment on electronic-mail adoption are shown to exist.
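The attenuation problem the abstract refers to is easy to reproduce. The sketch below is only an illustration, not the paper's PLS product-indicator estimator: it simulates data with a known interaction effect, degrades the predictors into summated scales built from noisy indicators (the item counts, error variances, and coefficient values are arbitrary assumptions), and shows that summated regression recovers a visibly smaller interaction coefficient than regression on the error-free scores.

```python
# Hypothetical sketch: measurement error attenuates an estimated interaction effect.
# This is NOT the authors' PLS procedure; it only illustrates why summated
# regression understates a known (simulated) interaction.
import numpy as np

rng = np.random.default_rng(0)
n, true_interaction = 5_000, 0.30

# Latent predictors and an outcome with a known interaction effect.
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.5 * x + 0.5 * z + true_interaction * x * z + rng.normal(scale=0.5, size=n)

def summated_scale(latent, n_items=4, error_sd=0.8):
    """Average of noisy indicators of the latent variable (reliability < 1)."""
    items = latent[:, None] + rng.normal(scale=error_sd, size=(latent.size, n_items))
    return items.mean(axis=1)

x_obs, z_obs = summated_scale(x), summated_scale(z)

def interaction_beta(x_, z_, y_):
    """OLS coefficient of the product term in y ~ x + z + x*z."""
    design = np.column_stack([np.ones_like(x_), x_, z_, x_ * z_])
    coefs, *_ = np.linalg.lstsq(design, y_, rcond=None)
    return coefs[3]

print("true interaction:           ", true_interaction)
print("estimate, error-free scores:", round(interaction_beta(x, z, y), 3))
print("estimate, summated scales:  ", round(interaction_beta(x_obs, z_obs, y), 3))
```

Modeling the indicators explicitly, as the latent variable approach in the article does, is what makes it possible to account for this measurement error rather than absorb it into the attenuated estimate.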
Authors' Reply to Allport and Kerler (2003). (Information Systems Research, 2003)
Abstract:
    The article presents a reply to questions raised by Allport and Kerler (A&K) in their research note about theory and data in scale development. With the objective of creating a single scale consistent with an a priori construct definition, we choose principal components analysis as a means for initial data reduction. However, the study was indeed designed to have an initial set of items useful for data reduction or scale purification, as opposed to running tests to immediately suggest valid measures. An author suggested that the only way to evaluate the psychometric properties of the responses to rating scales with both positively and negatively worded items would be to use confirmatory factor analysis and structural equation model methods. Besides wording effects, A&K suggested that a response bias effect based on positive or negative framing might well be another possibility. To aid model improvement, the modification index for a parameter is an estimate of the amount by which the discrepancy function would decrease if the analysis were repeated with the constraints on that parameter removed.
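For readers unfamiliar with the data-reduction step mentioned in the reply, the fragment below sketches a generic principal-components purification pass implemented directly with numpy. It is not the authors' analysis, and the 0.5 loading cutoff is a common rule of thumb rather than a value taken from the article.

```python
# Illustrative sketch (not the authors' analysis): principal components as an
# initial data-reduction step in scale purification.
import numpy as np

def first_component_loadings(items: np.ndarray) -> np.ndarray:
    """Loadings of the items on the first principal component of their correlation matrix.

    items: (n_respondents, n_items) array of raw scale responses.
    """
    corr = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)            # eigenvalues in ascending order
    loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])   # loadings on the largest component
    return loadings * np.sign(loadings.sum())          # resolve the arbitrary sign

# Usage sketch: retain items whose first-component loading clears a cutoff before
# moving on to reliability and validity checks (0.5 is a rule of thumb, not the
# article's value).
# keep = np.abs(first_component_loadings(responses)) >= 0.5
```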
Research Report: Better Theory Through Measurement--Developing a Scale to Capture Consensus on Appropriation. (Information Systems Research, 2002)
Abstract:
    Proper measurement is critical to the advancement of theory (Blalock 1979). Adaptive Structuration Theory (AST) is rapidly becoming an important theoretical paradigm for comprehending the impacts of advanced information technologies (DeSanctis and Poole 1994). Intended as a complement to the faithfulness of appropriation scale developed by Chin et al. (1997), this research note describes the development of an instrument to capture the AST construct of consensus on appropriation. Consensus on appropriation (COA) is the extent to which group participants perceive that they have agreed on how to adopt and use a technology. While consensus on appropriation is an important component of AST, no scale is currently available to capture this construct. This research note develops a COA instrument in the context of electronic meeting systems use. Initial item development, statistical analyses, and validity assessment (convergent, discriminant, and nomological) are described here in detail. The contribution of this effort is twofold: First, a scale is provided for an important construct from AST. Second, this report serves as an example of rigorous scale development using structural equation modeling. Employing rigorous procedures in the development of instruments to capture AST constructs is critical if the sound theoretical base provided by AST is to be fully exploited in understanding phenomena related to the use of advanced information technologies.
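Two statistics that commonly accompany the kind of convergent-validity assessment described here are composite reliability and average variance extracted. The sketch below uses the usual Fornell-Larcker style formulas with standardized loadings assumed; the example loadings are invented and nothing in it is drawn from the COA study itself.

```python
# Hedged sketch: routine statistics used when judging convergent validity of a
# reflective scale. Formulas follow common conventions, not the article's code;
# loadings are assumed standardized and error terms uncorrelated.
import numpy as np

def composite_reliability(loadings):
    """(sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2                # standardized error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """Mean squared standardized loading of the construct's indicators."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Example with made-up loadings: both values clear the usual 0.70 / 0.50 cutoffs.
print(composite_reliability([0.82, 0.78, 0.75, 0.80]))
print(average_variance_extracted([0.82, 0.78, 0.75, 0.80]))
```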
Survey Instruments in Information Systems. (MIS Quarterly, 1998)
Abstract:
    Due to the popularity of survey research in information systems we have launched a compilation of survey instruments and related information. This work started in 1988, as the disk-based Calgary Surveys Query System, and has now been extended to the world wide web via a contribution of "living scholarship" to MISQ Discovery. This work includes actual IS survey instruments--either in full text or via links to the appropriate citations--as well as introductory information to help get researchers started with the survey methodology.
The Importance of Specification in Causal Modeling: The Case of End-User Computing Satisfaction. (Information Systems Research, 1995)
Abstract:
    In a survey of IS instruments spanning the years 1973 to 1988 (Zmud and Boynton 1991), Doll and Torkzadeh's (1988) 12-item End-User Computing Satisfaction instrument was reported as one of three IS instruments that met conditions to qualify as "well developed." Recently, Etezadi-Amoli and Farhoomand (1991) questioned the validity of these measures. Part of their critique centered on the poor model fit obtained in a re-analysis using LISREL. While other potentially valid points were raised by Etezadi-Amoli and Farhoomand's critique, this report focuses only on their use of confirmatory factor analysis. In our re-analyses of Doll and Torkzadeh's original covariance measures, we show how model fit is extremely dependent on model specification. While still maintaining the same number of constructs and respective measures, we demonstrate how two alternatives to the original model analyzed by Etezadi-Amoli and Farhoomand can result in models with acceptable fits.
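The dependence of fit on specification can be made concrete with the maximum-likelihood fit function used in LISREL-style covariance structure analysis: the sample covariance matrix is fixed, so the fit value changes only through the model-implied covariance matrix, which is determined entirely by the chosen specification. The sketch below states that relationship for a simple reflective model; it is a generic illustration with standardized indicators assumed, not a reconstruction of the models compared in the article.

```python
# Minimal sketch (not the authors' LISREL re-analysis): the ML fit function compares
# the sample covariance matrix S with the covariance matrix implied by a particular
# model specification, so different specifications of the same constructs and items
# can yield very different fit values.
import numpy as np

def ml_fit(sample_cov: np.ndarray, implied: np.ndarray) -> float:
    """F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p  (0 means perfect fit)."""
    s, sigma = np.asarray(sample_cov), np.asarray(implied)
    p = s.shape[0]
    return (np.log(np.linalg.det(sigma))
            + np.trace(s @ np.linalg.inv(sigma))
            - np.log(np.linalg.det(s))
            - p)

def implied_cov(loadings: np.ndarray, factor_cov: np.ndarray) -> np.ndarray:
    """Sigma(theta) = Lambda Phi Lambda' + Theta for a reflective model with
    standardized indicators (error variances chosen so the diagonal equals 1)."""
    common = loadings @ factor_cov @ loadings.T
    theta = np.diag(1.0 - np.diag(common))
    return common + theta
```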